Goto

Collaborating Authors

human condition


When AI companions become witty: Can human brain recognize AI-generated irony?

Rao, Xiaohui, Wu, Hanlin, Cai, Zhenguang G.

arXiv.org Artificial Intelligence

As Large Language Models (LLMs) are increasingly deployed as social agents and trained to produce humor and irony, a question emerges: when encountering witty AI remarks, do people interpret these as intentional communication or mere computational output? This study investigates whether people adopt the intentional stance, attributing mental states to explain behavior, toward AI during irony comprehension. Irony provides an ideal paradigm because it requires distinguishing intentional contradictions from unintended errors through effortful semantic reanalysis. We compared behavioral and neural responses to ironic statements from AI versus human sources using established ERP components: the P200, reflecting early incongruity detection, and the P600, indexing the cognitive effort of reinterpreting incongruity as deliberate irony. Results demonstrate that people do not fully adopt the intentional stance toward AI-generated irony. Behaviorally, participants attributed incongruity to deliberate communication for both sources, though significantly less for AI than for human sources, showing a greater tendency to interpret AI incongruities as computational errors. Neural data revealed attenuated P200 and P600 effects for AI-generated irony, suggesting reduced effortful detection and reanalysis, consistent with diminished attribution of communicative intent. Notably, people who perceived AI as more sincere showed larger P200 and P600 effects for AI-generated irony, suggesting that intentional stance adoption is calibrated by specific mental models of artificial agents. These findings reveal that source attribution shapes the neural processing of social-communicative phenomena. Despite current LLMs' linguistic sophistication, achieving genuine social agency requires more than linguistic competence; it necessitates a shift in how humans perceive and attribute intentionality to artificial agents.
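
To make the ERP measures concrete, here is a minimal sketch of how component effects like the P200 and P600 are typically quantified: trial-averaged mean voltage within a component's time window, compared across sources. The simulated data, sampling rate, single electrode, and window boundaries (150-250 ms for P200, 500-800 ms for P600) are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

# Illustrative assumptions: 500 Hz sampling, epochs from -200 to 1000 ms,
# single-trial voltages at one centro-parietal electrode (microvolts).
fs = 500
times = np.arange(-0.2, 1.0, 1 / fs)          # one epoch's time axis
rng = np.random.default_rng(0)
n_trials = 120

def simulate_epochs(p200_amp, p600_amp):
    """Fake ERP epochs: Gaussian bumps near 200 ms and 600 ms plus noise."""
    p200 = p200_amp * np.exp(-((times - 0.20) / 0.04) ** 2)
    p600 = p600_amp * np.exp(-((times - 0.60) / 0.10) ** 2)
    return p200 + p600 + rng.normal(0, 2.0, (n_trials, times.size))

# Attenuated components for the AI source, as the abstract reports.
human = simulate_epochs(p200_amp=3.0, p600_amp=4.0)
ai    = simulate_epochs(p200_amp=1.5, p600_amp=2.0)

def mean_amplitude(epochs, t_lo, t_hi):
    """Trial-averaged mean voltage in a time window (the usual ERP measure)."""
    win = (times >= t_lo) & (times <= t_hi)
    return epochs[:, win].mean()

for name, (lo, hi) in {"P200": (0.15, 0.25), "P600": (0.50, 0.80)}.items():
    print(f"{name}: human {mean_amplitude(human, lo, hi):.2f} uV, "
          f"AI {mean_amplitude(ai, lo, hi):.2f} uV")
```

The condition contrast (human minus AI) in each window is what "attenuated P200 and P600 effects" refers to; real analyses would average many electrodes and test the contrast statistically.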


A funny companion: Distinct neural responses to perceived AI- versus human-generated humor

Rao, Xiaohui, Wu, Hanlin, Cai, Zhenguang G.

arXiv.org Artificial Intelligence

As AI companions become capable of human-like communication, including telling jokes, understanding how people cognitively and emotionally respond to AI humor becomes increasingly important. This study used electroencephalography (EEG) to compare how people process humor from AI versus human sources. Behavioral analysis revealed that participants rated AI and human humor as comparably funny. However, neurophysiological data showed that AI humor elicited a smaller N400 effect, suggesting reduced cognitive effort during the processing of incongruity. This was accompanied by a larger Late Positive Potential (LPP), indicating a greater degree of surprise and emotional response. This enhanced LPP likely stems from the violation of low initial expectations regarding AI's comedic capabilities. Furthermore, a key temporal dynamic emerged: human humor showed habituation effects, marked by an increasing N400 and a decreasing LPP over time. In contrast, AI humor demonstrated increasing processing efficiency and emotional reward, with a decreasing N400 and an increasing LPP. This trajectory reveals how the brain can dynamically update its predictive model of AI capabilities. This process of cumulative reinforcement challenges "algorithm aversion" in humor, as it demonstrates how cognitive adaptation to AI's language patterns can lead to an intensified emotional reward. Additionally, participants' social attitudes toward AI modulated these neural responses, with higher perceived AI trustworthiness correlating with enhanced emotional engagement. These findings indicate that the brain responds to AI humor with surprisingly positive and intense reactions, highlighting humor's potential for fostering genuine engagement in human-AI social interaction.
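
The temporal dynamics reported here amount to opposite-signed trends of component amplitude over trial order. Below is a minimal sketch, assuming hypothetical single-trial N400 and LPP amplitudes, of estimating such trends as least-squares slopes; the numbers and trend magnitudes are invented to mirror the described direction of the AI-humor effects, not taken from the paper.

```python
import numpy as np

# Hypothetical per-trial component amplitudes (microvolts), in trial order.
# In the real study these would come from single-trial ERP measures.
rng = np.random.default_rng(1)
n = 80
trial = np.arange(n)

# Simulate the reported directions for AI humor: the N400 negativity
# shrinks and the LPP grows across trials; noise stands in for the rest.
n400_ai = -4.0 + 0.03 * trial + rng.normal(0, 1.0, n)
lpp_ai  =  2.0 + 0.04 * trial + rng.normal(0, 1.0, n)

def slope(y):
    """Least-squares slope of amplitude over trial order (uV per trial)."""
    return np.polyfit(trial, y, 1)[0]

print(f"N400 slope: {slope(n400_ai):+.3f}  (positive = shrinking negativity)")
print(f"LPP slope:  {slope(lpp_ai):+.3f}  (positive = growing positivity)")
```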


Efficacy of a Computer Tutor that Models Expert Human Tutors

Olney, Andrew M., D'Mello, Sidney K., Person, Natalie, Cade, Whitney, Hays, Patrick, Dempsey, Claire W., Lehman, Blair, Williams, Betsy, Graesser, Art

arXiv.org Artificial Intelligence

Tutoring is highly effective for promoting learning. However, the contribution of expertise to tutoring effectiveness is unclear and continues to be debated. We conducted a 9-week learning efficacy study of an intelligent tutoring system (ITS) for biology modeled on expert human tutors with two control conditions: human tutors who were experts in the domain but not in tutoring and a no-tutoring condition. All conditions were supplemental to classroom instruction, and students took learning tests immediately before and after tutoring sessions as well as delayed tests 1-2 weeks later. Analysis using logistic mixed-effects modeling indicates significant positive effects on the immediate post-test for the ITS (d = .71) and human tutors (d = .66), which are in the 99th percentile of meta-analytic effects, as well as significant positive effects on the delayed post-test for the ITS (d = .36) and human tutors (d = .39). We discuss implications for the role of expertise in tutoring and the design of future studies.
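
For readers unfamiliar with the effect sizes, here is a hedged sketch of Cohen's d computed with a pooled standard deviation; the group means, spreads, and sample sizes are invented for illustration (the paper derives its effects from a logistic mixed-effects model, not from this raw-score formula).

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation (textbook formula;
    the paper's effects come from a logistic mixed-effects model)."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                      (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Hypothetical post-test proportions correct for ITS vs. no-tutoring groups,
# constructed so the effect lands roughly in the reported .7 range.
rng = np.random.default_rng(2)
its  = rng.normal(0.75, 0.12, 60).clip(0, 1)
ctrl = rng.normal(0.66, 0.12, 60).clip(0, 1)
print(f"d = {cohens_d(its, ctrl):.2f}")
```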


LLMs and the Human Condition

Wallis, Peter

arXiv.org Artificial Intelligence

This paper presents three established theories of human decision-making and describes how they can be integrated to provide a model of purposive human action. Taking seriously the idea of language as action, the model is then applied to conversational user interfaces. Theory-based AI research has had a hard time recently, and the aim here is to revitalise interest in understanding what LLMs are actually doing, other than running poorly understood machine learning routines over all the data the relevant Big Tech company can hoover up. When a Raspberry Pi computer for under US$50 is up to 400 times faster than the first commercial Cray supercomputer [crayVpi], Big Tech can get really close to having an infinite number of monkeys typing at random and producing text, some of which will make sense. By understanding where ChatGPT's apparent intelligence comes from, perhaps we can perform the magic with fewer resources and at the same time gain some understanding about our relationship with our world.


Will artificial intelligence change what it means to be human? – The Mail & Guardian

#artificialintelligence

Francis Fukuyama famously said in his 1992 book, The End of History and the Last Man, that history had come to an end circa 1989 with the fall of the Berlin Wall. It announced the end of the Cold War, the collapse of Soviet Russia and, generally, of communism as an economic system, and, correlatively, the unopposed global spread of liberal democracy. One hundred and eighty years before Fukuyama, Hegel had said something similar when he saw Napoleon on horseback riding into the town of Jena in 1806. Napoleon was, for many in those early days, the symbol of the spread of freedom through Europe and against the tyranny of monarchy. There is today the suspicion coming from both Marxists and conservatives alike that the imminent transformation of the labour process, its complete automation through robotics and artificial intelligence (AI), will bring about the end of history.


Social influence under uncertainty in interaction with peers, robots and computers

Zonca, Joshua, Folso, Anna, Sciutti, Alessandra

arXiv.org Artificial Intelligence

Taking advice from others requires confidence in their competence. This is important for interaction with peers, but also for collaboration with social robots and artificial agents. Nonetheless, we do not always have access to information about others' competence or performance. In these uncertain environments, do our prior beliefs about the nature and competence of our interacting partners modulate our willingness to rely on their judgments? In a joint perceptual decision-making task, participants made perceptual judgments and observed the simulated estimates of either a human participant, a social humanoid robot or a computer. They could then modify their estimates based on this feedback. Results show that participants' beliefs about the nature of their partner biased their compliance with its judgments: participants were more influenced by the social robot than by human and computer partners. This difference emerged strongly at the very beginning of the task and decreased with repeated exposure to empirical feedback on the partner's responses, revealing the role of prior beliefs in social influence under uncertainty. Furthermore, the results of our functional task suggest an important difference between human-human and human-robot interaction in the absence of overt socially relevant signals from the partner: the former is modulated by social normative mechanisms, whereas the latter is guided by purely informational mechanisms linked to the perceived competence of the partner.
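
In judge-advisor paradigms like this one, compliance is often quantified as the fraction of the distance toward the partner's estimate that a participant covers when revising. A minimal sketch of that common measure (not necessarily the paper's exact index):

```python
def influence(initial, partner, revised):
    """Fraction of the distance toward the partner's estimate that the
    participant covered when revising (0 = ignored, 1 = full adoption).
    A common judge-advisor measure; the paper's index may differ."""
    gap = partner - initial
    return (revised - initial) / gap if gap else 0.0

# Hypothetical trial: estimate 40, robot partner says 60, revision to 52.
print(influence(40, 60, 52))   # 0.6 -> strong compliance with the partner
```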


What Ever Happened to the Transhumanists?

#artificialintelligence

Gizmodo is 20 years old! To celebrate the anniversary, we're looking back at some of the most significant ways our lives have been thrown for a loop by our digital tools. Like so many others after 9/11, I felt spiritually and existentially lost. It's hard to believe now, but I was a regular churchgoer at the time. Watching those planes smash into the World Trade Center woke me from my extended cerebral slumber and I haven't set foot in a church since, aside from the occasional wedding or baptism. I didn't realize it at the time, but that godawful day triggered an intrapersonal renaissance in which my passion for science and philosophy was resuscitated. My marriage didn't survive this mental reboot and return to form, but it did lead me to some very positive places, resulting in my adoption of secular Buddhism, meditation, and a decade-long stint with vegetarianism.


Opinion

#artificialintelligence

Humans are an incredibly adaptable species. With the advent of livestock and domestic pets, for example, we invented new symbiotic relations that went on to define our civilization. Artificial intelligence as it exists today is already radically changing our economy, with computers usurping traditionally human professions and humans mobilizing to maintain them. If computers eventually become sentient, we will probably be quick to become more dependent on smart technology as a tool--and also to share a transactional relation with AI, akin to what we have with horses and dogs. Hollywood would have you believe that AI is an existential threat to civilization.


Towards a Real-time Measure of the Perception of Anthropomorphism in Human-robot Interaction

Tsfasman, Maria, Saravanan, Avinash, Viner, Dekel, Goslinga, Daan, de Wolf, Sarah, Raman, Chirag, Jonker, Catholijn M., Oertel, Catharine

arXiv.org Artificial Intelligence

How human-like do conversational robots need to look to enable long-term human-robot conversation? One essential aspect of long-term interaction is a human's ability to adapt to the varying degrees of a conversational partner's engagement and emotions. Prosodically, this can be achieved through (dis)entrainment. While speech synthesis has been a limiting factor for many years, restrictions in this regard are increasingly mitigated. These advancements now emphasise the importance of studying the effect of robot embodiment on human entrainment. In this study, we conducted a between-subjects online human-robot interaction experiment in an educational use-case scenario where a tutor was embodied through either a human or a robot face. Forty-three English-speaking participants took part in the study, for whom we analysed the degree of acoustic-prosodic entrainment to the human or robot face, respectively. We found that the degree of subjective and objective perception of anthropomorphism positively correlates with acoustic-prosodic entrainment.
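
As a rough illustration of how acoustic-prosodic entrainment can be quantified, here is a sketch correlating a per-turn pitch feature across conversational partners; the feature choice, the simulated values, and the use of a simple Pearson correlation are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-turn median pitch (Hz) for tutor voice and participant;
# a real study would extract such prosodic features from each turn's audio.
rng = np.random.default_rng(3)
tutor = rng.normal(180, 20, 30)
participant = 0.5 * tutor + rng.normal(75, 10, 30)   # built-in convergence

# One simple entrainment index: correlation of the partners' feature
# series across turns (positive = convergence / entrainment).
r, p = pearsonr(tutor, participant)
print(f"pitch entrainment r = {r:.2f} (p = {p:.3f})")
```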


Ai-Da has an existential crisis

#artificialintelligence

Named after mathematician and computer pioneer Ada Lovelace, Ai-Da is the world’s first humanoid AI robot artist that can create artistic pieces from sight using her robotic eyes and hands. Ai-Da…